Lecture 09.3: Total Variation Regularization

Hi. The last subsection of this chapter is total variation regularisation. As always, we consider the problem f = Au + η, where η is some additive noise and u is a vector in R^n. We interpret this vector as a pointwise evaluation, or a discretisation, of something continuous, so u is supposed to be something like (g(x_1), g(x_2), ..., g(x_n)). This is not necessary, but in the applications we are looking at right now it makes sense.
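
To make this setup concrete, here is a minimal sketch in Python; the choice of signal, the moving-average blur A and the noise level are illustrative assumptions, not fixed by the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
x = np.linspace(0.0, 1.0, n)

# u as a pointwise evaluation of a piecewise-constant g:
# a sharp interface at x = 0.5.
u_true = np.where(x < 0.5, 1.0, 0.0)

# An illustrative forward operator A: a local moving-average blur.
A = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 2), min(n, i + 3)
    A[i, lo:hi] = 1.0 / (hi - lo)

# Noisy data f = A u + eta.
f = A @ u_true + 0.05 * rng.standard_normal(n)
```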

So, for example, this could be an image. Then there is some kind of spatial structure, and u_1 and u_2 are adjacent pixels. This means that the entries u_1, u_2, u_3 are spatially connected and they are supposed to be close together. It could be a 1D image, a 2D image, a movie, or something like that. So far we have looked at the following regularisation techniques.

The first one was Tikhonov, where we minimised a misfit functional plus a penalty term,

    min over u of (1/2)||Au - f||_2^2 + (λ/2)||u||_2^2.

And I am going to be specific: I mean the Euclidean norm, which means that the Euclidean norm squared is the sum of the squared entries, ||u||_2^2 = u_1^2 + ... + u_n^2. This is what I mean by the 2-norm.
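
As a small sketch of this, the classical Tikhonov minimiser can be computed from the normal equations; A and f are assumed to come from a setup like the one above, and the value of lam is an illustrative choice.

```python
import numpy as np

def tikhonov(A, f, lam):
    """Classical Tikhonov: argmin_u 0.5*||A u - f||_2^2 + 0.5*lam*||u||_2^2,
    computed via the normal equations (A^T A + lam*I) u = A^T f."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ f)

# Example: u_tik = tikhonov(A, f, lam=0.1)
```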

We have also looked at generalised Tikhonov. Well, we didn't exactly specify what generalised Tikhonov means, but we looked at some variations of it. For example, the most specific and, at the same time, most general form is a penalty of the type (λ/2)||L(u - ū)||_2^2, where ū might be an empirical mean value of some sort. Usually L is a discretised

differential operator, i.e. in the setting above, L would be

    L = (1/Δx) · [ -1   1   0  ...  0 ]
                 [  0  -1   1  ...  0 ]
                 [         ...        ]
                 [  0  ...  0  -1   1 ],

so we have -1's on the main diagonal and 1's on the superdiagonal. For periodic images or periodic functions we use a periodic difference operator, which works the same way except that the last row wraps around, i.e. (Lu)_n = (u_1 - u_n)/Δx. This is what we will typically use for the differential operator. We could also use a centred difference operator, something like (u_{i+1} - u_{i-1})/(2Δx), but the forward difference works.
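
A small sketch of how these difference matrices can be assembled in NumPy; the function names are my own.

```python
import numpy as np

def forward_difference(n, dx=1.0):
    """Non-periodic forward differences: (n-1) x n matrix with -1 on the
    main diagonal and +1 on the superdiagonal, scaled by 1/dx."""
    L = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    L[idx, idx] = -1.0
    L[idx, idx + 1] = 1.0
    return L / dx

def periodic_forward_difference(n, dx=1.0):
    """Periodic variant: n x n matrix whose last row wraps around,
    so that the last entry of L u is (u[0] - u[n-1]) / dx."""
    L = np.zeros((n, n))
    idx = np.arange(n)
    L[idx, idx] = -1.0
    L[idx, (idx + 1) % n] = 1.0
    return L / dx
```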

The problem with this approach, well, it's not quite a problem, it's a feature and a problem at the same time, is that it enforces a certain smoothness: minimising the term ||Lu||_2^2 enforces smoothness on u. For an image this means that there are no sharp interfaces; we always get smooth transitions instead. We will probably see this in the exercises, but it is indeed the case, and it is not always nice. It is a regularisation method that is useful in some cases, but it is not very useful for images, because we as humans like images to have sharp edges and sharp contrasts, so we can see what is on the image. So sometimes, especially if u depicts an object, we don't want smooth reconstructions but favour sharp interfaces instead. What do I mean by that? We like black-and-white images; we don't like grey, smoothed-out images, because we can't see where the foreground and the background

are. Okay, so what do we do in this case? In such applications we can devise an alternative penalisation method. We minimise, as always, a data misfit functional, since obviously we want the reconstruction to be somehow close to the original image, but the penalisation term is not the squared 2-norm of the image itself or of some derivative of the image. Instead we take λ times the 1-norm of Lu, without a square:

    min over u of (1/2)||Au - f||_2^2 + λ||Lu||_1,

where the 1-norm of a vector is defined as the sum of the absolute values of its entries, ||v||_1 = |v_1| + ... + |v_n|. Contrast this with
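
Since the 1-norm is not differentiable, the normal-equations trick from the Tikhonov case no longer applies; one standard option is to hand the (convex) problem to a generic solver. A minimal sketch using CVXPY, assuming A, f and L as in the sketches above:

```python
import cvxpy as cp

def tv_reconstruction(A, f, L, lam):
    """TV reconstruction: argmin_u 0.5*||A u - f||_2^2 + lam*||L u||_1."""
    u = cp.Variable(A.shape[1])
    objective = 0.5 * cp.sum_squares(A @ u - f) + lam * cp.norm1(L @ u)
    cp.Problem(cp.Minimize(objective)).solve()
    return u.value

# Example: u_tv = tv_reconstruction(A, f, forward_difference(len(f)), lam=0.1)
```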

the 2-norm, where we sum up the squares. We call the minimiser of this approach the so-called TV reconstruction; this is the total variation approach, so TV stands for total variation. The reason for the name is that this functional is a discretisation of the so-called total variation from image processing. I don't want to get into the details here; we would need a lot of analysis and some measure theory, and I'm not willing to go down that rabbit hole for now. So just keep it in mind; it is just notation at this point. We will stick with the discretised version. If you do the analysis, you have to look at functions, and there you will need this object called the total variation, which is kind of similar to a norm. And again, L is the same matrix as above, so it is a discretised differential operator.
